183 research outputs found

    Alpha-band rhythms in visual task performance: phase-locking by rhythmic sensory stimulation

    Oscillations are an important aspect of neuronal activity. Interestingly, oscillatory patterns are also observed in behaviour, such as in visual performance measures after the presentation of a brief sensory event in the visual or another modality. These oscillations in visual performance cycle at the typical frequencies of brain rhythms, suggesting that perception may be closely linked to brain oscillations. Here we investigated this link for a prominent rhythm of the visual system (the alpha-rhythm, 8-12 Hz) by applying rhythmic visual stimulation at alpha-frequency (10.6 Hz), known to lead to a resonance response in visual areas, and testing its effects on subsequent visual target discrimination. Our data show that rhythmic visual stimulation at 10.6 Hz: 1) has specific behavioral consequences relative to stimulation at control frequencies (3.9 Hz, 7.1 Hz, 14.2 Hz); 2) leads to alpha-band oscillations in visual performance measures; and 3) these oscillations correlate in precise frequency across individuals with resting alpha-rhythms recorded over parieto-occipital areas. The most parsimonious explanation for these three findings is entrainment (phase-locking) of ongoing, perceptually relevant alpha-band brain oscillations by rhythmic sensory events. These findings are in line with occipital alpha-oscillations underlying periodicity in visual performance, and suggest that rhythmic stimulation at the frequencies of intrinsic brain rhythms can be used to reveal the influence of these rhythms on task performance and thereby study their functional roles.
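The oscillation in visual performance reported above is the kind of pattern one could test for by taking a spectrum of the behavioural accuracy time course across SOAs. A minimal numpy sketch, with entirely hypothetical accuracy data standing in for per-SOA hit rates (illustrative only, not the study's actual analysis):

```python
import numpy as np

def behavioral_spectrum(accuracy, dt):
    """Amplitude spectrum of a mean-removed accuracy-vs-SOA time course."""
    x = np.asarray(accuracy, float)
    x = x - x.mean()                       # remove the DC offset (mean accuracy)
    amp = np.abs(np.fft.rfft(x))           # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=dt)  # frequency axis in Hz
    return freqs, amp

# Hypothetical accuracy oscillating at 10.6 Hz, probed every 10 ms for 1 s
dt = 0.01
t = np.arange(0.0, 1.0, dt)
acc = 0.75 + 0.05 * np.sin(2 * np.pi * 10.6 * t)

freqs, amp = behavioral_spectrum(acc, dt)
peak = freqs[np.argmax(amp)]               # spectral peak lands in the alpha band
```

With a 1 s sampling window the frequency resolution is 1 Hz, so the recovered peak sits within one bin of the true 10.6 Hz behavioural rhythm; the per-subject peak could then be compared against each individual's resting alpha frequency.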

    Event-related alpha suppression in response to facial motion

    This article has been made available through the Brunel Open Access Publishing Fund. While biological motion refers to both face and body movements, little is known about the visual perception of facial motion. We therefore examined alpha-wave suppression, as a reduction in alpha power is thought to reflect visual activity, in addition to attentional reorienting and memory processes. Nineteen neurologically healthy adults were tested on their ability to discriminate between successive facial motion captures. These animations exhibited both rigid and non-rigid facial motion, as well as speech expressions. The structural and surface appearance of these facial animations did not differ, so participants' decisions were based solely on differences in facial movements. Upright, orientation-inverted and luminance-inverted facial stimuli were compared. At occipital and parieto-occipital regions, upright facial motion evoked a transient increase in alpha, which was then followed by a significant reduction. This finding is discussed in terms of neural efficiency, gating mechanisms and neural synchronization. Moreover, there was no difference in the amount of alpha suppression evoked by each facial stimulus at occipital regions, suggesting that early visual processing remains unaffected by these manipulations. However, upright facial motion evoked greater suppression at parieto-occipital sites, and did so at the shortest latency. Increased activity within this region may reflect greater attentional reorienting to natural facial motion, but also involvement of areas associated with the visual control of body effectors. © 2014 Girges et al.
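Event-related alpha suppression of the sort measured here is typically expressed as a percent power change relative to a pre-stimulus baseline. A minimal sketch, assuming simple FFT band power on single segments; the 10 Hz toy signals below are hypothetical, not the study's data:

```python
import numpy as np

def alpha_power(segment, fs, band=(8.0, 13.0)):
    """Mean spectral power in the alpha band for one EEG segment."""
    seg = np.asarray(segment, float)
    seg = seg - seg.mean()
    psd = np.abs(np.fft.rfft(seg)) ** 2 / len(seg)
    freqs = np.fft.rfftfreq(len(seg), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def alpha_suppression(baseline, post, fs):
    """Percent change in alpha power from baseline (negative = suppression)."""
    p0, p1 = alpha_power(baseline, fs), alpha_power(post, fs)
    return 100.0 * (p1 - p0) / p0

# Hypothetical data: a strong 10 Hz rhythm at baseline, attenuated post-stimulus
fs = 250
t = np.arange(0.0, 1.0, 1.0 / fs)
baseline = np.sin(2 * np.pi * 10 * t)
post = 0.4 * np.sin(2 * np.pi * 10 * t)   # amplitude reduced to 40%

erd = alpha_suppression(baseline, post, fs)   # about -84% (power scales with amplitude squared)
```

Comparing this index across electrode sites and stimulus conditions (upright versus inverted) is one way to quantify the occipital versus parieto-occipital differences the abstract describes.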

    Age-related delay in information accrual for faces: Evidence from a parametric, single-trial EEG approach

    Background: In this study, we quantified age-related changes in the time-course of face processing by means of an innovative single-trial ERP approach. Unlike the analyses used in previous studies, our approach does not rely on peak measurements and can provide a more sensitive measure of processing delays. Young and old adults (mean ages 22 and 70 years) performed a non-speeded discrimination task between two faces. The phase spectrum of these faces was manipulated parametrically to create pictures that ranged between pure noise (0% phase information) and the undistorted signal (100% phase information), with five intermediate steps. Results: Behavioural 75% correct thresholds were on average lower, and maximum accuracy was higher, in younger than older observers. ERPs from each subject were entered into a single-trial general linear regression model to identify variations in neural activity statistically associated with changes in image structure. The earliest age-related ERP differences occurred in the time window of the N170. Older observers had a significantly stronger N170 in response to noise, but this age difference decreased with increasing phase information. Overall, manipulating image phase information had a greater effect on ERPs from younger observers, which was quantified using a hierarchical modelling approach. Importantly, visual activity was modulated by the same stimulus parameters in younger and older subjects. The fit of the model, indexed by R2, was computed at multiple post-stimulus time points. The time-course of the R2 function showed significantly slower processing in older observers starting around 120 ms after stimulus onset. This age-related delay increased over time to reach a maximum around 190 ms, at which point younger observers led older observers by around 50 ms.
Conclusion: Using a component-free ERP analysis that provides precise timing of the visual system's sensitivity to image structure, the current study demonstrates that older observers accumulate face information more slowly than younger subjects. Additionally, the N170 appears to be less face-sensitive in older observers.
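The single-trial regression described in the Results can be sketched as an ordinary least-squares fit of trial-wise EEG amplitude on the phase-information level, computed independently at each time point; the R2 time course then indexes when the signal becomes sensitive to image structure. All data below are simulated, and the "sensitivity window" is a hypothetical placeholder:

```python
import numpy as np

def r2_timecourse(trials, predictor):
    """R^2 of a per-time-point linear regression of single-trial EEG on a
    trial-wise predictor. trials: (n_trials, n_times); predictor: (n_trials,)."""
    x = predictor - predictor.mean()
    y = trials - trials.mean(axis=0)
    beta = (x @ y) / (x @ x)              # OLS slope at each time point
    resid = y - np.outer(x, beta)
    ss_res = (resid ** 2).sum(axis=0)
    ss_tot = (y ** 2).sum(axis=0)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
n_trials, n_times = 200, 50
phase = rng.uniform(0, 100, n_trials)     # % phase information per trial
signal = np.zeros(n_times)
signal[20:30] = 0.05                      # hypothetical sensitivity window
trials = rng.normal(0, 1, (n_trials, n_times)) + np.outer(phase, signal)

r2 = r2_timecourse(trials, phase)         # peaks inside the sensitivity window
```

A delay between age groups could then be read off by comparing, for example, the latency at which each group's R2 function crosses half its maximum, which avoids relying on component peaks.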

    Benefits of Stimulus Congruency for Multisensory Facilitation of Visual Learning

    Background. Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning. Methodology/Principal Findings. Subjects were trained over five days on a visual motion coherence detection task with either congruent audiovisual or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with only visual stimuli. Conclusions/Significance. This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.

    A chronometric exploration of high-resolution ‘sensitive TMS masking’ effects on subjective and objective measures of vision

    Transcranial magnetic stimulation (TMS) can induce masking by interfering with ongoing neural activity in early visual cortex. Previous work has explored the chronometry of occipital involvement in vision by using single pulses of TMS with high temporal resolution. Conventionally, however, TMS intensities have been high, and the only measure used to evaluate masking was objective in nature. Recent studies have begun to incorporate subjective measures of vision alongside objective ones. The current study goes beyond previous work in two regards. First, we explored both objective vision (an orientation discrimination task) and subjective vision (a stimulus visibility rating on a four-point scale) across a wide range of time windows with high temporal resolution. Second, we used a very sensitive TMS-masking paradigm: stimulation was at relatively low TMS intensities, with a figure-8 coil, and the small stimulus was already difficult to discriminate at baseline. We hypothesized that this should increase the effective temporal resolution of our paradigm. Perhaps for this reason, we are able to report a rather interesting masking curve. Within the classical masking time window, previously reported to encompass broad SOAs anywhere between 60 and 120 ms, we report not one but at least two dips in objective performance, with no masking in between. The subjective measure of vision did not mirror this pattern. These preliminary data from our exploratory design suggest that, with sensitive TMS masking, we might be able to reveal visual processes in early visual cortex not previously reported.

    What Happens in Between? Human Oscillatory Brain Activity Related to Crossmodal Spatial Cueing

    Previous studies investigated the effects of crossmodal spatial attention by comparing the responses to validly versus invalidly cued target stimuli. Dynamics of cortical rhythms in the time interval between cue and target might contribute to cue effects on performance. Here, we studied the influence of spatial attention on ongoing oscillatory brain activity in the interval between cue and target onset. In a first experiment, subjects underwent periods of tactile stimulation (cue) followed by visual stimulation (target) in a spatial cueing task, as well as tactile stimulation alone as a control. In a second experiment, cue validity was set to 25%, 50%, or 75% to separate the effects of exogenous shifts of attention caused by tactile stimuli from those of endogenous shifts. Tactile stimuli produced: 1) a stronger lateralization of the sensorimotor beta-rhythm rebound (15–22 Hz) after tactile stimuli serving as cues versus not serving as cues; 2) a suppression of the occipital alpha-rhythm (7–13 Hz) appearing only in the cueing task (this suppression was stronger contralateral to the endogenously attended side and was predictive of behavioral success); and 3) an increase of prefrontal gamma-activity (25–35 Hz) specifically in the cueing task. We measured cue-related modulations of cortical rhythms which may accompany crossmodal spatial attention, expectation or decision, and therefore contribute to cue validity effects. The clearly lateralized alpha suppression after tactile cues in our data indicates its dependence on endogenous rather than exogenous shifts of visuo-spatial attention following a cue, independent of the cue's modality.
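The lateralization of band power reported in finding 1) is commonly summarized with a normalized index contrasting electrodes contralateral and ipsilateral to the cued side. A minimal sketch; the power values are hypothetical:

```python
import numpy as np

def lateralization_index(contra, ipsi):
    """(contra - ipsi) / (contra + ipsi) for band-power values.
    Negative values indicate lower power (stronger suppression)
    contralateral to the cued side; the index is bounded in [-1, 1]."""
    contra = np.asarray(contra, float)
    ipsi = np.asarray(ipsi, float)
    return (contra - ipsi) / (contra + ipsi)

# Hypothetical alpha power: suppression contralateral to the attended side
li = lateralization_index(contra=0.6, ipsi=1.0)   # = -0.25
```

Because the index is a ratio, it factors out overall power differences between subjects, so values can be averaged or compared across conditions (cue versus no-cue, or the three validity levels).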

    The Impact of Spatial Incongruence on an Auditory-Visual Illusion

    The sound-induced flash illusion is an auditory-visual illusion: when a single flash is presented along with two or more beeps, observers report seeing two or more flashes. Previous research has shown that the illusion gradually disappears as the temporal delay between auditory and visual stimuli increases, suggesting that the illusion is consistent with existing temporal rules of neural activation in the superior colliculus to multisensory stimuli. However, little is known about the effect of spatial incongruence, and whether the illusion follows the corresponding spatial rule. If the illusion occurs less strongly when auditory and visual stimuli are separated, then the integrative processes supporting the illusion must be strongly dependent on spatial congruence. In this case, the illusion would be consistent with both the spatial and temporal rules describing the response properties of multisensory neurons in the superior colliculus.

    Neural correlates of audiovisual motion capture

    Visual motion can affect the perceived direction of auditory motion (i.e., audiovisual motion capture). It is debated, though, whether this effect occurs at perceptual or decisional stages. Here, we examined the neural consequences of audiovisual motion capture using the mismatch negativity (MMN), an event-related brain potential reflecting pre-attentive auditory deviance detection. In an auditory-only condition, occasional changes in the direction of a moving sound (deviant) elicited an MMN starting around 150 ms. In an audiovisual condition, auditory standards and deviants were synchronized with a visual stimulus that moved in the same direction as the auditory standards. These audiovisual deviants did not evoke an MMN, indicating that visual motion reduced the perceptual difference between the sound motion of standards and deviants. The inhibition of the MMN by visual motion provides evidence that auditory and visual motion signals are integrated at early sensory processing stages.
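The MMN logic used here, a deviant-minus-standard difference wave quantified in a post-stimulus latency window, can be sketched as follows. The single-trial data are simulated, and the Gaussian deflection is a hypothetical stand-in for a real MMN:

```python
import numpy as np

def mismatch_negativity(deviant_trials, standard_trials, times, window=(0.15, 0.25)):
    """Difference wave (deviant ERP minus standard ERP) and its mean
    amplitude within an MMN latency window (times in seconds)."""
    diff = deviant_trials.mean(axis=0) - standard_trials.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    return diff, diff[mask].mean()

rng = np.random.default_rng(1)
times = np.linspace(-0.1, 0.5, 150)
# Hypothetical negative deflection peaking near 180 ms, in microvolts
mmn_shape = -2.0 * np.exp(-((times - 0.18) ** 2) / (2 * 0.03 ** 2))
standards = rng.normal(0, 1, (100, 150))
deviants = rng.normal(0, 1, (100, 150)) + mmn_shape

diff, amp = mismatch_negativity(deviants, standards, times)   # amp clearly negative
```

In the audiovisual condition described above, the prediction would be that `amp` no longer differs from zero, i.e., the difference wave is flattened when visual motion captures the auditory deviant.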

    Stay Tuned: What Is Special About Not Shifting Attention?

    Background: When studying attentional orienting processes, brain activity elicited by a symbolic cue is usually compared to a neutral condition in which no information is provided about the upcoming target location. It is generally assumed that when a neutral cue is provided, participants do not shift their attention. The present study sought to validate this assumption. We further investigated whether anticipated task demands had an impact on brain activity related to processing symbolic cues. Methodology/Principal Findings: Two experiments were conducted, during which event-related potentials were elicited by symbolic cues that instructed participants to shift their attention to a particular location on a computer screen. In Experiment 1, attention shift-inducing cues were compared to non-informative cues, while in both conditions participants were required to detect target stimuli that were subsequently presented at peripheral locations. In Experiment 2, a non-ambiguous "stay-central" cue that explicitly required participants not to shift their attention was used instead. In the latter case, target stimuli that followed a stay-central cue were also presented at a central location. Both experiments revealed enlarged early-latency contralateral ERP components to shift-inducing cues compared to those elicited by either non-informative (Exp. 1) or stay-central cues (Exp. 2). In addition, cueing effects were modulated by the anticipated difficulty of the upcoming target, particularly so in Experiment 2. A positive difference, predominantly over posterior contralateral scalp areas, was observed for stay-central cues, especially those predicting that the upcoming target would be easy. This effect was not present for non-informative cues. Conclusions/Significance: We interpret our results in terms of a more rapid engagement of attention in the presence of a more predictive instruction (i.e. a stay-central cue predicting an easy target).
Our results indicate that the human brain is capable of very rapidly identifying the difference between different types of instructions.